Extensible MultiModal Annotation (EMMA)

Author

  • Max Froumentin
Abstract

This talk will introduce the W3C Multimodal Interaction Activity, whose goal is to design a framework of specifications to enable access to the Web using multi-modal interaction. In particular we will introduce the Extensible MultiModal Annotation (EMMA) language specification. EMMA is an XML language for describing the interpretation of user input, combining transcriptions of raw signals into words with metadata to help applications resolve uncertainties and contradictions in interpretations. We will also discuss the issues encountered by the Working Group as the language is being developed, such as whether to use RDF or not, how best to combine interpretations, or what metadata properties to represent.
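To make the kind of markup EMMA defines more concrete, the following Python sketch builds a small EMMA-style interpretation document with xml.etree.ElementTree. It is illustrative only: the element and attribute names (emma:one-of, emma:interpretation, emma:confidence, emma:tokens, emma:medium, emma:mode) and the namespace URI follow the EMMA 1.0 Recommendation as later published, not necessarily the working draft discussed in this talk, and the application payload (a destination element) is an invented example.

    import xml.etree.ElementTree as ET

    EMMA_NS = "http://www.w3.org/2003/04/emma"
    ET.register_namespace("emma", EMMA_NS)

    def emma(name):
        # Clark notation so ElementTree serializes the emma: prefix correctly
        return "{%s}%s" % (EMMA_NS, name)

    # Root element of an EMMA document
    root = ET.Element(emma("emma"), {"version": "1.0"})

    # emma:one-of groups competing interpretations of the same user input
    one_of = ET.SubElement(root, emma("one-of"),
                           {emma("medium"): "acoustic", emma("mode"): "voice"})

    # Two hypotheses from a speech recognizer, each carrying the recognized
    # tokens and a confidence score as metadata
    for ident, city, conf in [("int1", "Boston", "0.75"), ("int2", "Austin", "0.20")]:
        interp = ET.SubElement(one_of, emma("interpretation"),
                               {"id": ident,
                                emma("confidence"): conf,
                                emma("tokens"): "flights to " + city})
        # Application-specific payload (hypothetical element, not defined by EMMA)
        ET.SubElement(interp, "destination").text = city

    print(ET.tostring(root, encoding="unicode"))

An application consuming such a document would typically pick the interpretation with the highest emma:confidence, or ask the user to disambiguate when the scores are close, which is the kind of uncertainty resolution the metadata is meant to support.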


Related resources

Using DiAML and ANVIL for multimodal dialogue annotation

This paper shows how interoperable dialogue act annotations, using the multidimensional annotation scheme and the markup language DiAML of ISO standard 24617-2, can conveniently be obtained using the newly implemented facility in the ANVIL annotation tool to produce XML-based output directly in the DiAML format.


Annotating and Measuring Multimodal Behaviour - Tycoon Metrics in the Anvil Tool

We demonstrate how the Tycoon framework can be put to practice with the Anvil tool in a concrete case study. Tycoon offers a coding scheme and analysis metrics for multimodal communication scenarios. Anvil is a generic, extensible and ergonomically designed annotation tool for videos. In this paper, we describe the Anvil tool, the Tycoon scheme/metrics, and their implementation in Anvil for a v...


MINT.tools: tools and adaptors supporting acquisition, annotation and analysis of multimodal corpora

This paper presents a collection of tools (and adaptors for existing tools) that we have recently developed, which support acquisition, annotation and analysis of multimodal corpora. For acquisition, an extensible architecture is offered that integrates various sensors, based on existing connectors (e.g. for motion capturing via VICON, or ART) and on connectors we contribute (for motion trackin...


The NITE Object Model Library for Handling Structured Linguistic Annotation on Multimodal Data Sets

The NITE Object Model Library is an implemented set of routines for loading, accessing, manipulating, and serializing linguistic data. It is similar in spirit to the data handling provided by the Annotation Graph Toolkit, but is aimed at data that is heavily cross-annotated with structured information, and thus chooses higher expressivity at the cost of processing speed. We describe our open-so...


Representing Multimodal Linguistics Annotated Data

The question of interoperability for annotated linguistic resources covers several aspects. First, it requires a representation framework making it possible to compare, and potentially merge, different annotation schemas. In this paper, a general description level representing multimodal linguistic annotations is proposed. It focuses on time and data content representation: This...



Journal:

Volume   Issue

Pages  -

Publication year: 2004